Latent class analysis: selected models, excluding Indigenous-origin status but including year, recoded maternal age, and recoded gestational week at milestone 1; class characterization and fit measures (glca)
We load the data
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 652896 34.9 1308657 69.9 919363 49.1
Vcells 1171126 9.0 8388608 64.0 2056718 15.7
# load glca models with and without the distal outcome
load("data2_lca2_adj4_alt_sin_po_2023_05_12 (3).RData")
We load the packages
knitr::opts_chunk$set(echo = TRUE)
if(!require(poLCA)){install.packages("poLCA")}
if(!require(poLCAParallel)){devtools::install_github("QMUL/poLCAParallel@package")}
if(!require(compareGroups)){install.packages("compareGroups")}
if(!require(parallel)){install.packages("parallel")}
if(!require(Hmisc)){install.packages("Hmisc")}
if(!require(tidyverse)){install.packages("tidyverse")}
try(if(!require(sjPlot)){install.packages("sjPlot")})
if(!require(emmeans)){install.packages("emmeans")}
if(!require(nnet)){install.packages("nnet")}
if(!require(here)){install.packages("here")}
if(!require(doParallel)){install.packages("doParallel")}
if(!require(progress)){install.packages("progress")}
if(!require(caret)){install.packages("caret")}
if(!require(rpart)){install.packages("rpart")}
if(!require(rpart.plot)){install.packages("rpart.plot")}
if(!require(partykit)){install.packages("partykit")}
if(!require(randomForest)){install.packages("randomForest")}
if(!require(ggcorrplot)){install.packages("ggcorrplot")}
if(!require(polycor)){install.packages("polycor")}
if(!require(tableone)){install.packages("tableone")}
if(!require(broom)){install.packages("broom")}
if(!require(plotly)){install.packages("plotly")}
if(!require(rsvg)){install.packages("rsvg")}
if(!require(DiagrammeRsvg)){install.packages("DiagrammeRsvg")}
if(!require(effects)){install.packages("effects")}
if(!require(glca)){install.packages("glca")}
Call:
glca(formula = f_preds, data = mydata_preds3, nclass = 6, n.init = 500,
decreasing = T, testiter = 500, maxiter = 10000, seed = seed,
verbose = FALSE)
Manifest items : AÑO CAUSAL EDAD_MUJER_REC PAIS_ORIGEN_REC HITO1_EDAD_GEST_SEM_REC MACROZONA PREV_TRAMO_REC
Categories for manifest items :
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
AÑO 2 3 4 5 6
CAUSAL 2 3 4
EDAD_MUJER_REC 1 2 3 4
PAIS_ORIGEN_REC 1 2 3
HITO1_EDAD_GEST_SEM_REC 1 2 3 4
MACROZONA 1 2 3 4 5 6
PREV_TRAMO_REC 1 2 3 4 5
Model : Latent class analysis
Number of latent classes : 6
Number of observations : 3789
Number of parameters : 143
log-likelihood : -28000.84
G-squared : 3795.407
AIC : 56287.68
BIC : 57179.98
Marginal prevalences for latent classes :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6
0.05654 0.15216 0.12093 0.50826 0.04419 0.11792
Class prevalences by group :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6
ALL 0.05654 0.15216 0.12093 0.50826 0.04419 0.11792
Item-response probabilities :
AÑO
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 1.0000 0.0000 0.0000 0.0000 0.0000
Class 2 0.1783 0.1892 0.2013 0.1921 0.2392
Class 3 0.1608 0.2181 0.1843 0.2404 0.1963
Class 4 0.1396 0.2457 0.1749 0.2391 0.2007
Class 5 0.1365 0.1735 0.2447 0.1212 0.3241
Class 6 0.1112 0.2391 0.1874 0.2646 0.1977
CAUSAL
Y = 1 Y = 2 Y = 3
Class 1 0.6179 0.3821 0.0000
Class 2 0.0057 0.0000 0.9943
Class 3 0.1883 0.8117 0.0000
Class 4 0.3845 0.6155 0.0000
Class 5 0.0564 0.0014 0.9421
Class 6 0.4459 0.5541 0.0000
EDAD_MUJER_REC
Y = 1 Y = 2 Y = 3 Y = 4
Class 1 0.5084 0.0248 0.2075 0.2592
Class 2 0.3162 0.3481 0.2461 0.0896
Class 3 0.4943 0.0000 0.0363 0.4694
Class 4 0.5088 0.0167 0.2097 0.2648
Class 5 0.4910 0.1598 0.2301 0.1190
Class 6 0.5468 0.0091 0.1670 0.2771
PAIS_ORIGEN_REC
Y = 1 Y = 2 Y = 3
Class 1 0.084 0.7909 0.1251
Class 2 0.000 0.9943 0.0057
Class 3 0.000 0.9221 0.0779
Class 4 0.000 1.0000 0.0000
Class 5 0.000 0.0000 1.0000
Class 6 0.000 0.0000 1.0000
HITO1_EDAD_GEST_SEM_REC
Y = 1 Y = 2 Y = 3 Y = 4
Class 1 0.0420 0.0881 0.4332 0.4367
Class 2 0.0106 0.9859 0.0035 0.0000
Class 3 0.0140 0.3710 0.5630 0.0520
Class 4 0.0277 0.1859 0.6829 0.1035
Class 5 0.0255 0.9745 0.0000 0.0000
Class 6 0.0266 0.1021 0.7611 0.1102
MACROZONA
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0371 0.2237 0.3417 0.1327 0.0850 0.1799
Class 2 0.0036 0.3851 0.1749 0.1452 0.0862 0.2050
Class 3 0.0000 0.7239 0.0923 0.0496 0.0692 0.0650
Class 4 0.0000 0.3360 0.1770 0.2024 0.0973 0.1872
Class 5 0.0060 0.5515 0.0880 0.0176 0.2908 0.0461
Class 6 0.0000 0.5973 0.0850 0.0612 0.2133 0.0432
PREV_TRAMO_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.0310 0.0205 0.6286 0.2613 0.0585
Class 2 0.0018 0.0696 0.7240 0.1995 0.0053
Class 3 0.0000 0.8253 0.0000 0.1671 0.0076
Class 4 0.0006 0.0307 0.6140 0.3546 0.0000
Class 5 0.0294 0.0065 0.5252 0.1644 0.2746
Class 6 0.0007 0.0113 0.6121 0.2998 0.0760
# https://rdrr.io/cran/glca/src/R/summary.glca.R
# Class prevalence by group:
# mean(best_model_glca_w_distal_outcome$posterior$ALL$`Class 1`)
# Item-response probabilities (most likely response per item and class):
# print(sapply(best_model_glca_w_distal_outcome$param$rho$ALL,
#              function(m) apply(m, 1, which.max)))
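The fit statistics reported in the model summary above follow directly from the log-likelihood: AIC = -2\*logLik + 2\*k and BIC = -2\*logLik + k\*log(n), where k is the number of parameters and n the number of observations. A quick check with the values transcribed from the output:

```r
# Reproduce AIC/BIC from the reported log-likelihood (values transcribed
# from the 6-class model summary above)
ll <- -28000.84   # log-likelihood
k  <- 143         # number of parameters
n  <- 3789        # number of observations
aic <- -2 * ll + 2 * k        # 56287.68, as reported
bic <- -2 * ll + k * log(n)   # ~57179.98, as reported
round(c(AIC = aic, BIC = bic), 2)
```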
Figure 1: Selected Model
Figure 2: Selected Model
Figure 3: Selected Model
Figure 4: Selected Model
Figure 5: Selected Model
Figure 6: Selected Model
Figure 7: Selected Model
Figure 8: Selected Model
rho_glca<-
do.call("bind_rows",best_model_glca$param$rho$ALL) %>%
t() %>%
round(2) %>%
data.table::data.table(keep.rownames = T) %>%
magrittr::set_colnames(c("variables", paste0("Class",1:length(best_model_glca$param$gamma)))) %>%
tidyr::separate(variables, into=c("var", "prob"), sep=".Y =")
lcmodel_glca <- reshape2::melt(rho_glca, level=2) %>% dplyr::rename("class"="variable")
traductor_cats <- readxl::read_excel("tabla12_corr.xlsx") %>%
dplyr::mutate(level=readr::parse_double(level)) %>%
dplyr::mutate(level= dplyr::case_when(grepl("CAUSAL",Name)~ level-1,T~level)) %>%
dplyr::mutate(level= dplyr::case_when(grepl("AÑO",Name)~ level-1,T~level)) %>%
dplyr::mutate(charactersitic=gsub(" \\(%\\)", "", Name))
lcmodel_glca<- lcmodel_glca %>%
dplyr::mutate(pr=as.numeric(gsub("[^0-9.]+", "", prob))) %>%
dplyr::left_join(traductor_cats[,c("charactersitic", "level", "CATEGORIA")], by= c("var"="charactersitic", "pr"="level"))
#dplyr::mutate(CATEGORIA= dplyr::case_when(var=="AÑO" & prob==" 1"~"Perdidos", T~CATEGORIA))
lcmodel_glca$text_label<-paste0("Categoría:",lcmodel_glca$CATEGORIA,"<br>%: ",scales::percent(lcmodel_glca$value))
zp3 <- ggplot(lcmodel_glca,aes(x = var, y = value, fill = factor(pr), label=text_label))
zp3 <- zp3 + geom_bar(stat = "identity", position = "stack")
zp3 <- zp3 + facet_grid(class ~ .)
zp3 <- zp3 + scale_fill_brewer(type="seq", palette="Greys", na.value = "white") +theme_bw()
zp3 <- zp3 + labs(y = "Porcentaje de probabilidad de respuesta",
x = "",
fill ="Categorías de\nRespuesta")
zp3 <- zp3 + theme( axis.text.y=element_blank(),
axis.ticks.y=element_blank(),
panel.grid.major.y=element_blank())
zp3 <- zp3 + guides(fill = guide_legend(reverse=TRUE))
zp3 <- zp3 + theme(axis.text.x = element_text(angle = 30, hjust = 1))
#print(zp1)
ggplotly(zp3, tooltip = c("text_label"))%>% layout(xaxis= list(showticklabels = T),height=600, width=800)
Figure 9: Selected Model
Call:
glca(formula = f_adj, data = mydata_preds3, nclass = 7, n.init = 500,
decreasing = T, testiter = 500, maxiter = 10000, seed = seed,
verbose = FALSE)
Manifest items : AÑO CAUSAL EDAD_MUJER_REC PAIS_ORIGEN_REC HITO1_EDAD_GEST_SEM_REC MACROZONA PREV_TRAMO_REC
Covariates (Level 1) : outcome
Categories for manifest items :
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
AÑO 2 3 4 5 6
CAUSAL 2 3 4
EDAD_MUJER_REC 1 2 3 4
PAIS_ORIGEN_REC 1 2 3
HITO1_EDAD_GEST_SEM_REC 1 2 3 4
MACROZONA 1 2 3 4 5 6
PREV_TRAMO_REC 1 2 3 4 5
Model : Latent class analysis
Number of latent classes : 7
Number of observations : 3789
Number of parameters : 173
log-likelihood : -27801.08
G-squared : 5014.013
AIC : 55948.15
BIC : 57027.65
Marginal prevalences for latent classes :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
0.05225 0.12116 0.15229 0.12730 0.40283 0.04512 0.09906
Class prevalences by group :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
ALL 0.05225 0.12116 0.15229 0.1273 0.40283 0.04512 0.09906
Logistic regression coefficients :
Class 1/7 Class 2/7 Class 3/7 Class 4/7 Class 5/7
(Intercept) -1.0711 1.6402 0.1435 -1.1488 1.8684
outcome1 0.4737 -1.9441 0.3169 1.4885 -0.5415
Class 6/7
(Intercept) -1.3631
outcome1 0.6295
Item-response probabilities :
AÑO
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.9341 0.0000 0.0019 0.0640 0.0000
Class 2 0.1881 0.2480 0.2299 0.1971 0.1369
Class 3 0.1787 0.1892 0.2029 0.1915 0.2377
Class 4 0.1629 0.2023 0.1899 0.2466 0.1983
Class 5 0.1400 0.2480 0.1523 0.2410 0.2187
Class 6 0.1380 0.1716 0.2450 0.1223 0.3232
Class 7 0.1113 0.2384 0.1947 0.2627 0.1928
CAUSAL
Y = 1 Y = 2 Y = 3
Class 1 0.7388 0.2612 0.0000
Class 2 0.0527 0.9473 0.0000
Class 3 0.0000 0.0066 0.9934
Class 4 0.1759 0.8241 0.0000
Class 5 0.4730 0.5270 0.0000
Class 6 0.0561 0.0209 0.9230
Class 7 0.4907 0.5093 0.0000
EDAD_MUJER_REC
Y = 1 Y = 2 Y = 3 Y = 4
Class 1 0.5246 0.0228 0.2119 0.2407
Class 2 0.3374 0.0326 0.1655 0.4646
Class 3 0.3153 0.3497 0.2456 0.0894
Class 4 0.4935 0.0000 0.0351 0.4714
Class 5 0.5531 0.0122 0.2234 0.2112
Class 6 0.4898 0.1566 0.2314 0.1222
Class 7 0.5793 0.0060 0.1675 0.2472
PAIS_ORIGEN_REC
Y = 1 Y = 2 Y = 3
Class 1 0.0853 0.7945 0.1201
Class 2 0.0024 0.8955 0.1021
Class 3 0.0000 0.9944 0.0056
Class 4 0.0000 0.9279 0.0721
Class 5 0.0000 0.9836 0.0164
Class 6 0.0000 0.0000 1.0000
Class 7 0.0000 0.0000 1.0000
HITO1_EDAD_GEST_SEM_REC
Y = 1 Y = 2 Y = 3 Y = 4
Class 1 0.0467 0.0687 0.3900 0.4945
Class 2 0.0000 0.0000 0.5140 0.4860
Class 3 0.0104 0.9861 0.0035 0.0000
Class 4 0.0135 0.3928 0.5707 0.0231
Class 5 0.0350 0.2300 0.7273 0.0077
Class 6 0.0250 0.9750 0.0000 0.0000
Class 7 0.0309 0.0910 0.8192 0.0589
MACROZONA
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0404 0.1673 0.4148 0.1293 0.0696 0.1786
Class 2 0.0000 0.3434 0.1257 0.2738 0.0922 0.1649
Class 3 0.0035 0.3841 0.1747 0.1464 0.0862 0.2052
Class 4 0.0000 0.7205 0.0972 0.0462 0.0711 0.0650
Class 5 0.0000 0.3370 0.1822 0.1829 0.1035 0.1944
Class 6 0.0058 0.5465 0.0897 0.0170 0.2921 0.0489
Class 7 0.0000 0.6401 0.0774 0.0398 0.2210 0.0218
PREV_TRAMO_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.0215 0.0272 0.6448 0.2438 0.0626
Class 2 0.0082 0.0582 0.5991 0.3346 0.0000
Class 3 0.0018 0.0695 0.7241 0.1994 0.0053
Class 4 0.0000 0.7295 0.0542 0.2072 0.0091
Class 5 0.0000 0.0357 0.6123 0.3520 0.0000
Class 6 0.0292 0.0066 0.5235 0.1647 0.2761
Class 7 0.0000 0.0222 0.6002 0.2924 0.0852
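The two fitted models can be compared on information criteria. Using the values transcribed from the two summaries above, the 7-class model with the distal-outcome covariate attains both a lower AIC and a lower BIC than the 6-class model:

```r
# Compare the two reported models on AIC/BIC (log-likelihoods and parameter
# counts transcribed from the glca summaries above; n = 3789 in both)
fit <- data.frame(
  model  = c("6 classes", "7 classes + outcome"),
  logLik = c(-28000.84, -27801.08),
  npar   = c(143, 173)
)
fit$AIC <- -2 * fit$logLik + 2 * fit$npar
fit$BIC <- -2 * fit$logLik + fit$npar * log(3789)
fit   # the 7-class model has lower AIC and lower BIC
```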
rho_glca_adj<-
do.call("bind_rows",best_model_glca_w_distal_outcome$param$rho$ALL) %>%
t() %>%
round(2) %>%
data.table::data.table(keep.rownames = T) %>%
magrittr::set_colnames(c("variables", paste0("Class",1:dim(best_model_glca_w_distal_outcome$param$gamma[[1]])[[2]]))) %>%
tidyr::separate(variables, into=c("var", "prob"), sep=".Y =")
lcmodel_glca_adj <- reshape2::melt(rho_glca_adj, level=2) %>% dplyr::rename("class"="variable")
lcmodel_glca_adj<- lcmodel_glca_adj %>%
dplyr::mutate(pr=as.numeric(gsub("[^0-9.]+", "", prob))) %>%
dplyr::left_join(traductor_cats[,c("charactersitic", "level", "CATEGORIA")], by= c("var"="charactersitic", "pr"="level")) %>%
dplyr::mutate(CATEGORIA= dplyr::case_when(var=="AÑO" & prob==" 1"~"Perdidos", T~CATEGORIA))
lcmodel_glca_adj$text_label<-paste0("Categoría:",lcmodel_glca_adj$CATEGORIA,"<br>%: ",scales::percent(lcmodel_glca_adj$value))
zp4 <- ggplot(lcmodel_glca_adj,aes(x = var, y = value, fill = factor(pr), label=text_label))
zp4 <- zp4 + geom_bar(stat = "identity", position = "stack")
zp4 <- zp4 + facet_grid(class ~ .)
zp4 <- zp4 + scale_fill_brewer(type="seq", palette="Greys", na.value = "white") +theme_bw()
zp4 <- zp4 + labs(y = "Porcentaje de probabilidad de respuesta",
x = "",
fill ="Categorías de\nRespuesta")
zp4 <- zp4 + theme( axis.text.y=element_blank(),
axis.ticks.y=element_blank(),
panel.grid.major.y=element_blank())
zp4 <- zp4 + guides(fill = guide_legend(reverse=TRUE))
zp4 <- zp4 + theme(axis.text.x = element_text(angle = 30, hjust = 1))
#print(zp1)
ggplotly(zp4, tooltip = c("text_label"))%>% layout(xaxis= list(showticklabels = T),height=600, width=800)
Figure 10: Selected Model
Figure 11: Selected Model
Figure 12: Selected Model
Figure 13: Selected Model
Figure 14: Selected Model
Figure 15: Selected Model
Figure 16: Selected Model
Figure 17: Selected Model
Figure 18: Selected Model
glca_gam_dist_outcome <- best_model_glca_w_distal_outcome$param$gamma[[1]]
colnames(glca_gam_dist_outcome) <- paste0("Class", 1:ncol(glca_gam_dist_outcome))
#conditional probabilities
#Pr(B1=1|Class 3)
posteriors_glca_adj <- data.frame(best_model_glca_w_distal_outcome$posterior$ALL)
posteriors_glca_adj$predclass <- max.col(posteriors_glca_adj)  # modal class assignment
classification_table_adj <- plyr::ddply(posteriors_glca_adj, "predclass",
                                        function(x) colSums(x[, 1:length(LCA_best_model_adj_mod$P)]))
classification_errors_adj <- 1 - sum(diag(as.matrix(classification_table_adj[, 2:(length(LCA_best_model_adj_mod$P) + 1)]))) /
  sum(classification_table_adj[, 2:(length(LCA_best_model_adj_mod$P) + 1)])
warning(paste("Classification error: ", round(classification_errors_adj, 2)))
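The modal-assignment logic behind the classification table can be illustrated on a toy posterior matrix (hypothetical values): each row is assigned to its highest-probability class, posteriors are summed within each assigned class, and the off-diagonal mass of the resulting table is the classification error.

```r
# Toy sketch (hypothetical posteriors, 3 observations x 2 classes)
toy_post <- rbind(c(0.9, 0.1),
                  c(0.8, 0.2),
                  c(0.3, 0.7))
pred <- max.col(toy_post)       # modal assignment: 1 1 2
tab  <- rowsum(toy_post, pred)  # sum posteriors within each assigned class
1 - sum(diag(tab)) / sum(tab)   # classification error: 0.2
```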
entropy_alt <- function(p) sum(-p * log(p))
error_prior_adj <- entropy_alt(LCA_best_model_adj_mod$P) # Class proportions
error_post_adj <- mean(apply(LCA_best_model_adj_mod$posterior, 1, entropy_alt),na.rm=T)
R2_entropy_alt_adj <- (error_prior_adj - error_post_adj) / error_prior_adj
warning(paste("Entropy: ", round(R2_entropy_alt_adj, 2)))
#https://stackoverflow.com/questions/72783185/entropy-calculation-gives-nan-is-applying-na-omit-a-valid-tweak
entropy.safe <- function (p) {
if (any(p > 1 | p < 0)) stop("probability must be between 0 and 1")
log.p <- numeric(length(p))
safe <- p != 0
log.p[safe] <- log(p[safe])
sum(-p * log.p)
}
error_prior2_adj <- entropy.safe(LCA_best_model_adj_mod$P) # Class proportions
error_post2_adj <- mean(apply(LCA_best_model_adj_mod$posterior, 1, entropy.safe),na.rm=T)
R2_entropy_safe_adj <- (error_prior2_adj - error_post2_adj) / error_prior2_adj
warning(paste("Entropy (safe): ", round(R2_entropy_safe_adj, 2)))
#https://gist.github.com/daob/c2b6d83815ddd57cde3cebfdc2c267b3
warning(paste("Entropy (Oberski's solution): ", round(entropy.R2(LCA_best_model_adj_mod), 2)))  # entropy.R2() comes from the gist above
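As a sanity check on the entropy-based R-squared, R2 = (E_prior - E_post) / E_prior, a toy example (hypothetical posteriors) shows the two extremes: perfectly separated posteriors give R2 = 1, and completely uninformative posteriors give R2 = 0.

```r
# Entropy-based R-squared at its two extremes (toy posteriors, 2 classes)
ent <- function(p) sum(-p[p > 0] * log(p[p > 0]))  # 0*log(0) treated as 0
P     <- c(0.5, 0.5)                     # class proportions
sharp <- rbind(c(1, 0), c(0, 1))         # certain assignments
flat  <- rbind(c(0.5, 0.5), c(0.5, 0.5)) # uninformative assignments
(ent(P) - mean(apply(sharp, 1, ent))) / ent(P)  # 1
(ent(P) - mean(apply(flat,  1, ent))) / ent(P)  # 0
```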
# Class-selection criteria: minimum average posterior probability of class membership (>0.7), interpretability (classes are clearly distinguishable), and parsimony (each class has a sufficient sample size for further analysis; n >= 50).
Class 1 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.3170 -1.1488 0.5630 -2.041 0.04136 *
outcome1 4.4304 1.4885 0.5259 2.830 0.00468 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 2 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.2559 -1.3631 0.4255 -3.204 0.00137 **
outcome1 1.8766 0.6295 0.4227 1.489 0.13648
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 3 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 6.4780 1.8684 0.2340 7.985 1.86e-15 ***
outcome1 0.5819 -0.5415 0.2190 -2.473 0.0135 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 4 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.3426 -1.0711 0.6032 -1.776 0.0759 .
outcome1 1.6059 0.4737 0.5790 0.818 0.4133
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 5 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 1.1543 0.1435 0.2661 0.539 0.590
outcome1 1.3728 0.3169 0.2605 1.216 0.224
Class 6 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 5.1564 1.6402 0.2592 6.328 2.80e-10 ***
outcome1 0.1431 -1.9441 0.2527 -7.694 1.83e-14 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
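The odds-ratio column in the tables above is simply the exponentiated logistic coefficient. For example, for Class 1/7 (values transcribed from the output):

```r
# Odds ratios are exp(coefficient); reproduces the Class 1/7 row above
round(exp(c(intercept = -1.1488, outcome1 = 1.4885)), 3)
# intercept ~0.317, outcome1 ~4.430, matching the reported odds ratios
```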
save.image("data2_lca3_glca_sin_po.RData")
require(tidyverse)
sesion_info <- devtools::session_info()
dplyr::select(
tibble::as_tibble(sesion_info$packages),
c(package, loadedversion, source)
) %>%
DT::datatable(filter = 'top', colnames = c('Row number' = 1, 'Package' = 2, 'Version' = 3, 'Source' = 4),
caption = htmltools::tags$caption(
style = 'caption-side: top; text-align: left;',
'', htmltools::em('Packages')),
options=list(
initComplete = htmlwidgets::JS(
"function(settings, json) {",
"$(this.api().tables().body()).css({
'font-family': 'Helvetica Neue',
'font-size': '50%',
'code-inline-font-size': '15%',
'white-space': 'nowrap',
'line-height': '0.75em',
'min-height': '0.5em'
});",#;
"}")))